
    Partial Evaluation for Java Malware Detection

    The fact that Java is platform independent gives hackers the opportunity to write exploits that can target users on any platform that has a JVM implementation. Metasploit is a well-known source of Java exploits, and to circumvent detection by anti-virus (AV) software, obfuscation techniques are routinely applied to make an exploit more difficult to recognise. Popular obfuscation techniques for Java include string obfuscation and applying reflection to hide method calls, two techniques that can be used together or independently. This paper shows how to apply partial evaluation to remove these obfuscations and thereby improve AV matching. The paper presents a partial evaluator for Jimple, a typed three-address code suitable for optimisation and program analysis, and demonstrates how the residual Jimple code, when transformed back into Java, improves the detection rates of a number of commercial AV products.
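The core idea can be sketched in miniature: a partial evaluator folds operations whose operands are compile-time constants, so a method name assembled character by character collapses back into a plain string, exposing the hidden call. This is a toy over an invented three-address instruction set, not actual Jimple.

```python
# Toy partial evaluator over straight-line three-address code, sketching how
# constant folding can strip string obfuscation and expose a hidden call.
# Instruction forms and names are hypothetical, not real Jimple.

def partially_evaluate(program):
    """Fold constant operations; emit only the instructions that remain."""
    env = {}        # variable -> known constant value
    residual = []   # instructions that could not be folded away
    for instr in program:
        op, dst, *args = instr
        if op == "const":
            env[dst] = args[0]
        elif op == "chr" and args[0] in env:
            env[dst] = chr(env[args[0]])                # int -> character
        elif op == "concat" and all(a in env for a in args):
            env[dst] = env[args[0]] + env[args[1]]      # fold string build-up
        elif op == "invoke":
            # Reflection-style call: once the method name folds to a
            # constant, residualise a direct, AV-recognisable call.
            residual.append(("call", env.get(dst, dst)))
        else:
            residual.append(instr)
    return residual

# The method name "exec" assembled one character at a time, then invoked:
obfuscated = [
    ("const", "a", 101), ("chr", "ca", "a"),   # 'e'
    ("const", "b", 120), ("chr", "cb", "b"),   # 'x'
    ("const", "c", 101), ("chr", "cc", "c"),   # 'e'
    ("const", "d", 99),  ("chr", "cd", "d"),   # 'c'
    ("concat", "t1", "ca", "cb"),
    ("concat", "t2", "cc", "cd"),
    ("concat", "name", "t1", "t2"),
    ("invoke", "name"),
]
residual = partially_evaluate(obfuscated)      # folds to a single direct call
```

After folding, the residual program contains one direct call whose target name is visible again, which is what makes AV signature matching possible.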

    Sudden Changes and Their Associations with Quality of Life during COVID-19 Lockdown: A Cross-Sectional Study in the French-Speaking Part of Switzerland.

    The lockdown due to the COVID-19 pandemic led to various sudden changes for a large number of individuals. In response, the question has arisen of how individuals from different social and economic strata cope with those changes, and how much the changes have affected their mental well-being. Choosing strategies that address both the pandemic and the well-being of the population has also been a challenge for governments. While a large number of studies have investigated the mental health of different populations during the COVID-19 pandemic, few have explored the number and type of changes experienced during lockdown by the general population, alongside their relationships with health-related quality of life (HRQoL). To fill this research gap, an observational cross-sectional study of those associations was conducted in the French-speaking part of the Swiss general population. Data were collected from 431 participants during the first four weeks of the COVID-19 lockdown. Multivariate regressions were used to identify the sociodemographic profile of the population that experienced different types and numbers of changes during this period, the association of those changes with mental and physical HRQoL and with infection beliefs, and the perception of the governmental measures. We show that the more changes people experienced, the lower their mental HRQoL; however, adherence to governmental measures helped people cope with the imposed changes, even though the number of unexpected and unwanted changes strained their mental HRQoL. The low-income population experienced financial difficulties and changes in their food intake more frequently, while dual-citizenship or non-Swiss individuals reported conflictual situations more frequently. Sport practice had a positive association with mental HRQoL; nevertheless, a decrease in sport practice was frequently reported, which correlated with a lower mental HRQoL.
Risk perception of COVID-19 increased as the physical HRQoL score decreased, which supports the effectiveness of governmental communication regarding the pandemic. Our results support that government measures should be accompanied by effective, targeted communication about the risk of infection, in order to encourage all strata of the general population to follow the measures and adapt to the changes without unduly affecting their mental health. Such communication might help to reduce the impact of policy-imposed changes on the mental HRQoL of the general population by inducing voluntary changes in informed and engaged populations.
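The study's multivariate-regression setup can be illustrated with a minimal least-squares sketch. The data, coefficient values, and predictor names below are synthetic assumptions chosen only to mirror the reported signs (more changes lowering mental HRQoL, adherence buffering it), not the study's estimates.

```python
import numpy as np

# Synthetic stand-in for the study's data; only the sample size (431) and
# the signs of the effects are taken from the abstract.
rng = np.random.default_rng(0)
n = 431
changes = rng.integers(0, 10, size=n)        # number of lockdown changes
adherence = rng.uniform(0, 1, size=n)        # adherence to measures
noise = rng.normal(0, 3, size=n)
mental_hrqol = 70 - 2.0 * changes + 8.0 * adherence + noise

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), changes, adherence])
coef, *_ = np.linalg.lstsq(X, mental_hrqol, rcond=None)
intercept, b_changes, b_adherence = coef     # b_changes < 0, b_adherence > 0
```

The fitted coefficients recover the assumed pattern: a negative coefficient on the number of changes and a positive one on adherence.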

    Gaspar data-centric framework

    This paper presents the Gaspar data-centric framework for developing high-performance parallel applications in Java. Our approach is based on data iterators and on the map pattern of computation. The framework provides an efficient data Application Programming Interface (API) that supports flexible data layout and data tiling. Data layout and tiling enable the improvement of data locality, which is essential to foster application scalability on modern multi-core systems. The paper presents the framework's data-centric concepts and shows that its performance is comparable to pure Java code.
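The tiling idea behind the framework can be sketched language-agnostically: apply a map over the data one contiguous tile at a time, so each tile stays resident in cache while it is processed. Gaspar itself is a Java framework; the function name and tile size here are illustrative only.

```python
# Minimal sketch of a tiled map: same result as a plain elementwise map,
# but the traversal order keeps each tile hot in cache.

def tiled_map(f, data, tile=1024):
    """Apply f elementwise, one contiguous tile at a time for locality."""
    out = [None] * len(data)
    for start in range(0, len(data), tile):
        block = data[start:start + tile]   # one cache-friendly tile
        out[start:start + tile] = [f(x) for x in block]
    return out

squares = tiled_map(lambda x: x * x, list(range(10)), tile=4)
```

The payoff of tiling appears for large arrays and memory-bound functions, where the tile size is matched to the cache; the result is identical to an untiled map.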

    The use of invasive techniques, angiography and indicator dilution, for quantification of valvular regurgitations

    Angiographic techniques have been used for the quantification of mitral or aortic and, rarely, tricuspid regurgitation. Mitral or aortic regurgitant volume per beat and the regurgitation fraction (fm and fao, respectively) are obtained from the angiographic determination of total left ventricular stroke volume (TSV) and the forward stroke volume (FSV) estimated by a different technique. Although this procedure is generally accepted as the gold standard for quantification of left heart regurgitations, there are several limitations: in the presence of combined mitral and aortic regurgitation, no separate quantification of fao and fm is feasible; heart rate at the time of determination of FSV (from Fick or dye dilution cardiac output) and of TSV (angiography) may differ; there is a tendency to consistently overestimate stroke volume with angiographic techniques; and repeated estimations of TSV by angiography are influenced by the circulatory effects of the contrast dye. In contrast, with indicator dilution techniques, where upstream and downstream sampling allow the simultaneous estimation of forward and regurgitant flow, the accuracy of the determination of FSV is well established, and repeated estimations of fao and fm are possible because the indicators have no cardiovascular effects. These methods are, however, crucially dependent on thorough mixing of the regurgitant volume with the blood in the upstream chamber. In 23 patients with isolated aortic regurgitation there was a positive correlation between fao evaluated by thermodilution and fao determined by the biplane angio-Fick method (r = 0.59); fao by thermodilution averaged 0.40 and fao by angio-Fick 0.46 (NS). In 23 patients with isolated mitral regurgitation there was also a positive correlation between fm determined by thermodilution and fm determined by angio-Fick (r = 0.71). However, fm by thermodilution was consistently smaller than fm by angio-Fick (average values 0.45 and 0.55, respectively; P < 0.005).
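The angiographic quantification described above reduces to one formula: the regurgitant volume per beat is TSV − FSV, and the regurgitation fraction is that volume divided by TSV. The volumes in the example below are illustrative, chosen so the result matches the study's average thermodilution fao of 0.40.

```python
def regurgitant_fraction(tsv_ml, fsv_ml):
    """Regurgitation fraction f = (TSV - FSV) / TSV.

    tsv_ml: total left ventricular stroke volume (angiography), in ml.
    fsv_ml: forward stroke volume (Fick or dye dilution), in ml.
    """
    return (tsv_ml - fsv_ml) / tsv_ml

# Illustrative volumes: TSV of 100 ml with FSV of 60 ml gives f = 0.40
f = regurgitant_fraction(100.0, 60.0)
```

The formula also makes the stated limitations concrete: any systematic overestimate of TSV, or a heart-rate mismatch between the TSV and FSV measurements, propagates directly into f.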

    Elastic scaling for data stream processing

    This article addresses the profitability problem associated with auto-parallelization of general-purpose distributed data stream processing applications. Auto-parallelization involves locating regions in the application's data flow graph that can be replicated at run-time to apply data partitioning, in order to achieve scale. In order to make auto-parallelization effective in practice, the profitability question needs to be answered: how many parallel channels provide the best throughput? The answer to this question changes depending on the workload dynamics and resource availability at run-time. In this article, we propose an elastic auto-parallelization solution that can dynamically adjust the number of channels used to achieve high throughput without unnecessarily wasting resources. Most importantly, our solution can handle partitioned stateful operators via run-time state migration, which is fully transparent to the application developers. We provide an implementation and evaluation of the system on an industrial-strength data stream processing platform to validate our solution.
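The profitability question can be sketched as a greedy controller: keep adding parallel channels while each additional one still yields a meaningful throughput gain. The gain threshold and the saturating throughput model below are assumptions for illustration, not the paper's actual control algorithm.

```python
# Greedy sketch of channel-count profitability: scale up until the marginal
# throughput gain falls below a threshold. Threshold and workload model are
# hypothetical, not the paper's algorithm.

def choose_channels(measure_throughput, max_channels=16, min_gain=0.05):
    """Return the smallest channel count past which gains become marginal."""
    channels = 1
    best = measure_throughput(channels)
    while channels < max_channels:
        t = measure_throughput(channels + 1)
        if t < best * (1 + min_gain):   # marginal gain too small: stop
            break
        channels, best = channels + 1, t
    return channels

# Hypothetical workload whose throughput saturates at four channels:
saturating = lambda c: 100.0 * min(c, 4)
chosen = choose_channels(saturating)    # settles at the saturation point
```

A real elastic system would re-run this decision as the workload shifts and, for partitioned stateful operators, migrate state when the channel count changes.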

    A catalog of stream processing optimizations

    Various research communities have independently arrived at stream processing as a programming model for efficient and parallel computing. These communities include digital signal processing, databases, operating systems, and complex event processing. Since each community faces applications with challenging performance requirements, each of them has developed some of the same optimizations, but often with conflicting terminology and unstated assumptions. This article presents a survey of optimizations for stream processing. It is aimed both at users who need to understand and guide the system's optimizer and at implementers who need to make engineering tradeoffs. To consolidate terminology, this article is organized as a catalog, in a style similar to catalogs of design patterns or refactorings. To make assumptions explicit and help understand tradeoffs, each optimization is presented with its safety constraints (when does it preserve correctness?) and a profitability experiment (when does it improve performance?). We hope that this survey will help future streaming system builders to stand on the shoulders of giants from not just their own community. © 2014 ACM
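One entry from such a catalog can be sketched concretely: operator fusion merges two adjacent operators into one, trading pipeline parallelism for lower queueing and communication cost. In catalog terms, the safety constraint in this sketch is that both operators are stateless and deterministic; the operator names are hypothetical.

```python
# Operator fusion in miniature: two per-tuple operators become one, so no
# intermediate queue sits between them. Safe here because both operators
# are stateless and deterministic.

def fuse(op1, op2):
    """Return a single operator applying op1 then op2 per tuple."""
    return lambda item: op2(op1(item))

parse = lambda line: int(line)   # upstream operator (hypothetical)
scale = lambda v: v * 10         # downstream operator (hypothetical)
fused = fuse(parse, scale)       # one operator, no intermediate queue
result = [fused(s) for s in ["1", "2", "3"]]
```

The profitability side is the tradeoff the catalog style makes explicit: fusion wins when queueing overhead dominates, and loses when the two operators could have run in parallel on separate cores.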

    Myosin isoenzymes in human hypertrophic hearts. Shift in atrial myosin heavy chains and in ventricular myosin light chains

    The myosin light chain complement and the proteolytic peptide patterns of myosin heavy chains were studied, by two-dimensional and one-dimensional electrophoretic techniques respectively, in a total of 57 samples from ventricular and atrial tissues of normal and hypertrophied human hearts. Hypertrophies were classified haemodynamically as due to pressure overload or volume overload. In addition to the occurrence of ventricular light chains in hypertrophied atria, we also observed the atrial light chain-1 (ALC-1) in hypertrophied ventricular tissues. On average, ALC-1 comprised over 6% of total light chain-1 in pressure-overloaded ventricles and around 3% in volume-overloaded ventricles; in single cases of pressure overload, ALC-1 amounted to over 20% of total light chain-1. With regard to the myosin heavy chains, limited digestion by two different proteinases produced over 200 clearly resolvable peptides. The absence of any detectable differences in the peptide patterns between myosin heavy chains from normal and hypertrophic tissues of the left or right ventricle is in line with the findings of J. J. Schier and R. S. Adelstein (J Clin Invest 1982; 69: 816-825). In atrial tissues, however, reproducible qualitative differences in the peptide patterns indicated that during hypertrophy a different type of myosin heavy chain becomes expressed. No differences were seen between the myosin heavy chains from normal left and right atria.

    Presence-absence versus presence-only modelling methods for predicting bird habitat suitability

    Habitat suitability models can be generated using methods requiring information on species presence, or on species presence and absence. Knowledge of the predictive performance of such methods is critical to establishing their optimal scope of application for mapping current species distributions under different constraints. Here, we use breeding bird atlas data in Catalonia as a working example and analyse the relative performance of two methods: Ecological Niche Factor Analysis (ENFA), using presence data only, and Generalised Linear Models (GLM), using presence/absence data. Models were run on a set of forest species with similar habitat requirements but with varying occurrence rates (prevalence) and niche positions (marginality). Our results support the idea that GLM predictions are more accurate than those obtained with ENFA. This was particularly true when species were using available habitats in proportion to their suitability, making absence data reliable and useful for enhancing model calibration. Species marginality in niche space was also correlated with predictive accuracy, i.e. species with less restricted ecological requirements were modelled less accurately than species with more restricted requirements, irrespective of the method employed. Models for wide-ranging and tolerant species were more sensitive to absence data, suggesting that presence/absence methods may be particularly important for predicting the distributions of this type of species. We conclude that modellers should consider that species' ecological characteristics are critical in determining the accuracy of models, and that generalist species' distributions are difficult to predict accurately regardless of the method used.
Being based on distinct approaches to data adjustment and data quality, habitat distribution modelling methods cover different application areas, making it difficult to identify one that is universally applicable. Our results suggest, however, that if absence data are available, methods using this information should preferably be used in most situations.
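A presence/absence GLM of the kind compared here is, in its simplest form, a logistic regression of observed presence on environmental covariates. The sketch below fits one by gradient ascent on synthetic data; the predictor, coefficients, and sample size are hypothetical stand-ins for atlas covariates such as forest cover, not the study's data or software.

```python
import numpy as np

# Synthetic presence/absence data: probability of presence rises with a
# single hypothetical covariate ("forest cover" on a 0-1 scale).
rng = np.random.default_rng(1)
n = 500
forest_cover = rng.uniform(0, 1, size=n)
true_logit = -3.0 + 6.0 * forest_cover
presence = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))  # 1 = present

# Logistic GLM fitted by plain gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), forest_cover])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))          # predicted suitability
    w += 0.5 * X.T @ (presence - p) / n   # log-likelihood gradient step
slope = w[1]                               # positive: presence rises with cover
```

The fitted model uses the absences directly, which is exactly the information a presence-only method such as ENFA must do without; in production one would use a GLM routine from a statistics package rather than hand-rolled gradient ascent.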

    SPL: An extensible language for distributed stream processing

    Big data is revolutionizing how all sectors of our economy do business, including telecommunication, transportation, medical, and finance. Big data comes in two flavors: data at rest and data in motion. Processing data in motion is stream processing. Stream processing for big data analytics often requires scale that can only be delivered by a distributed system, exploiting parallelism on many hosts and many cores. One such distributed stream processing system is IBM Streams. Early customer experience with IBM Streams uncovered that another core requirement is extensibility, since customers want to build high-performance domain-specific operators for use in their streaming applications. Based on these two core requirements of distribution and extensibility, we designed and implemented the Streams Processing Language (SPL). This article describes SPL with an emphasis on the language design, distributed runtime, and extensibility mechanism. SPL is now the gateway for the IBM Streams platform, used by our customers for stream processing in a broad range of application domains. © 2017 ACM

    Tutorial: Stream processing optimizations

    This tutorial starts with a survey of optimizations for streaming applications. The survey is organized as a catalog that introduces uniform terminology and a common categorization of optimizations across disciplines, such as data management, programming languages, and operating systems. After this survey, the tutorial continues with a deep dive into the fission optimization, which automatically transforms streaming applications for data parallelism. Fission helps an application improve its throughput by taking advantage of multiple cores in a machine or, in the case of a distributed streaming engine, multiple machines in a cluster. While the survey of optimizations covers a wide range of work from the literature, the in-depth discussion of fission relies more heavily on the presenters' own research and experience in the area. The tutorial concludes with a discussion of open research challenges in the field of stream processing optimizations. Copyright © 2013 ACM
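Fission for a stateless operator can be sketched in a few lines: split the stream round-robin across operator replicas, apply the operator in parallel, and merge results back into the original order. The channel count and operator are illustrative, and a real engine would stream tuples rather than batch a list.

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal fission sketch: round-robin split, parallel replicas of a
# stateless operator, order-preserving merge. Illustrative only.

def fissioned(op, stream, channels=3):
    """Apply op to every item using `channels` parallel replicas."""
    parts = [stream[i::channels] for i in range(channels)]  # round-robin split
    with ThreadPoolExecutor(max_workers=channels) as pool:
        results = list(pool.map(lambda part: [op(x) for x in part], parts))
    merged = [None] * len(stream)
    for i, part in enumerate(results):   # interleave back in arrival order
        merged[i::channels] = part
    return merged

doubled = fissioned(lambda x: 2 * x, list(range(10)))
```

Statelessness is what makes the round-robin split safe here; partitioned stateful operators instead require key-based routing so that all tuples with the same key reach the same replica.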